    Quantifying Self-Organization with Optimal Predictors

    Despite broad interest in self-organizing systems, there are few quantitative, experimentally applicable criteria for self-organization. The existing criteria all give counter-intuitive results for important cases. In this Letter, we propose a new criterion, namely an internally generated increase in the statistical complexity, the amount of information required for optimal prediction of the system's dynamics. We precisely define this complexity for spatially extended dynamical systems, using the probabilistic ideas of mutual information and minimal sufficient statistics. This leads to a general method for predicting such systems, and a simple algorithm for estimating statistical complexity. The results of applying this algorithm to a class of models of excitable media (cyclic cellular automata) strongly support our proposal. Comment: Four pages, two color figures
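
    A minimal sketch of the two ingredients the abstract describes: the cyclic cellular automaton dynamics, and a crude finite-depth estimate of statistical complexity obtained by grouping local pasts into predictive classes. The lattice size, number of colors, and neighborhood depth below are illustrative choices, not the paper's settings.

        import numpy as np
        from collections import Counter, defaultdict

        def cyclic_ca_step(grid, k):
            """One synchronous update of a 1D cyclic CA with periodic boundaries:
            a cell advances to the next color mod k if a neighbor carries it."""
            left, right = np.roll(grid, 1), np.roll(grid, -1)
            successor = (grid + 1) % k
            excited = (left == successor) | (right == successor)
            return np.where(excited, successor, grid)

        rng = np.random.default_rng(0)
        k, n, steps = 4, 200, 100
        grid = rng.integers(0, k, size=n)
        history = [grid]
        for _ in range(steps):
            grid = cyclic_ca_step(grid, k)
            history.append(grid)

        # Group depth-1 past light cones by their conditional distribution over
        # the cell's next state; the entropy of the resulting classes is a rough
        # finite-depth stand-in for the statistical complexity.
        counts = defaultdict(Counter)
        for t in range(1, len(history)):
            prev, cur = history[t - 1], history[t]
            for i in range(n):
                past = (prev[i - 1], prev[i], prev[(i + 1) % n])
                counts[past][cur[i]] += 1

        classes = Counter()
        for past, nxt in counts.items():
            total = sum(nxt.values())
            signature = tuple(sorted((s, round(c / total, 3)) for s, c in nxt.items()))
            classes[signature] += total

        N = sum(classes.values())
        C = -sum((c / N) * np.log2(c / N) for c in classes.values())
        print(f"depth-1 statistical complexity estimate: {C:.3f} bits")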

    Quantifying the complexity of random Boolean networks

    We study two measures of the complexity of heterogeneous extended systems, taking random Boolean networks as prototypical cases. A measure defined by Shalizi et al. for cellular automata, based on a criterion for optimal statistical prediction [Shalizi et al., Phys. Rev. Lett. 93, 118701 (2004)], does not distinguish between the spatial inhomogeneity of the ordered phase and the dynamical inhomogeneity of the disordered phase. A modification in which the complexities of individual nodes are calculated yields vanishing complexity values for networks in the ordered and critical regimes and for highly disordered networks, peaking somewhere in the disordered regime. Individual nodes with high complexity are the ones that pass the most information from the past to the future, a quantity that depends in a nontrivial way on both the Boolean function of a given node and its location within the network. Comment: 8 pages, 4 figures
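
    A sketch of the kind of experiment behind these results, assuming nothing beyond the abstract: a Kauffman-style random Boolean network, with a simple per-node mutual information between successive states as a stand-in for the past-to-future information flow mentioned above. N, K, and the run length are arbitrary.

        import numpy as np

        rng = np.random.default_rng(1)
        N, K, T = 50, 2, 5000
        wiring = rng.integers(0, N, size=(N, K))        # K random inputs per node
        tables = rng.integers(0, 2, size=(N, 2 ** K))   # random Boolean functions

        def step(state):
            idx = np.zeros(N, dtype=int)
            for j in range(K):
                idx = 2 * idx + state[wiring[:, j]]     # encode each node's inputs
            return tables[np.arange(N), idx]

        state = rng.integers(0, 2, size=N)
        traj = np.empty((T, N), dtype=int)
        for t in range(T):
            traj[t] = state
            state = step(state)

        def mutual_info(x, y):
            """I(X;Y) in bits for two binary sequences."""
            mi = 0.0
            for a in (0, 1):
                for b in (0, 1):
                    pxy = np.mean((x == a) & (y == b))
                    if pxy > 0:
                        mi += pxy * np.log2(pxy / (np.mean(x == a) * np.mean(y == b)))
            return mi

        node_mi = [mutual_info(traj[:-1, i], traj[1:, i]) for i in range(N)]
        print("most informative node:", int(np.argmax(node_mi)))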

    Hawkes process as a model of social interactions: a view on video dynamics

    We study by computer simulation the "Hawkes process" that was proposed in a recent paper by Crane and Sornette (Proc. Nat. Acad. Sci. USA 105, 15649 (2008)) as a plausible model for the dynamics of YouTube video viewing numbers. We test the claims made there that robust identification is possible for classes of dynamic response following activity bursts. Our simulated time series for the Hawkes process indeed fall into the different categories predicted by Crane and Sornette. However, the Hawkes process gives a much narrower spread of decay exponents than the YouTube data, suggesting limits to the universality of the Hawkes-based analysis. Comment: Added errors to parameter estimates and further description. IOP style, 13 pages, 5 figures
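
    For readers who want to reproduce this kind of test, here is a minimal Ogata-thinning simulation of a Hawkes process with a power-law memory kernel, the general form used in epidemic models of viewing dynamics. The baseline rate and kernel parameters are illustrative, not the paper's calibrated values.

        import numpy as np

        rng = np.random.default_rng(2)
        mu = 0.5                  # baseline (exogenous) event rate
        alpha, theta = 0.8, 1.2   # kernel weight and power-law decay exponent

        def kernel(dt):
            """Power-law memory kernel phi(dt), monotonically decreasing."""
            return alpha * theta / (1.0 + dt) ** (1.0 + theta)

        def intensity(t, events):
            return mu + sum(kernel(t - s) for s in events if s <= t)

        # Ogata thinning: between events the intensity only decays, so the
        # current intensity is a valid local upper bound.
        events, t, horizon = [], 0.0, 500.0
        while t < horizon:
            lam_bar = intensity(t, events)
            t += rng.exponential(1.0 / lam_bar)
            if t < horizon and rng.random() < intensity(t, events) / lam_bar:
                events.append(t)

        print(f"simulated {len(events)} events")

    Fitting decay exponents to the post-burst activity of many such runs is what exposes the narrow exponent spread reported above.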

    Dynamic communities in multichannel data: An application to the foreign exchange market during the 2007–2008 credit crisis

    We study the cluster dynamics of multichannel (multivariate) time series by representing their correlations as time-dependent networks and investigating the evolution of network communities. We employ a node-centric approach that allows us to track the effects of the community evolution on the functional roles of individual nodes without having to track entire communities. As an example, we consider a foreign exchange market network in which each node represents an exchange rate and each edge represents a time-dependent correlation between the rates. We study the period 2005–2008, which includes the recent credit and liquidity crisis. Using dynamical community detection, we find that exchange rates that are strongly attached to their community are persistently grouped with the same set of rates, whereas exchange rates that are important for the transfer of information tend to be positioned on the edges of communities. Our analysis successfully uncovers major trading changes that occurred in the market during the credit crisis. Comment: 8 pages, 6 figures, accepted for publication in Chaos
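
    A sketch of the windowed correlation-network construction, assuming a (T, n) array of exchange-rate returns; the window length and threshold are illustrative. The paper tracks community evolution with a node-centric method, whereas this sketch simply detects communities independently in each window.

        import numpy as np
        import networkx as nx
        from networkx.algorithms.community import greedy_modularity_communities

        def correlation_network(returns, threshold=0.5):
            """Graph whose edges are pairwise correlations above a threshold."""
            corr = np.corrcoef(returns.T)
            g = nx.Graph()
            g.add_nodes_from(range(corr.shape[0]))
            for i in range(corr.shape[0]):
                for j in range(i + 1, corr.shape[0]):
                    if abs(corr[i, j]) >= threshold:
                        g.add_edge(i, j, weight=abs(corr[i, j]))
            return g

        def windowed_communities(returns, window=100, step=20):
            """Yield (window start, communities) over a sliding window."""
            for start in range(0, returns.shape[0] - window + 1, step):
                g = correlation_network(returns[start:start + window])
                yield start, list(greedy_modularity_communities(g))

    Nodes that hop between communities from one window to the next are the natural candidates for the information-transfer roles discussed above.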

    The Visibility Graph: a new method for estimating the Hurst exponent of fractional Brownian motion

    Fractional Brownian motion (fBm) has been used as a theoretical framework to study real time series appearing in diverse scientific fields. Because of its intrinsic non-stationarity and long-range dependence, its characterization via the Hurst parameter H requires sophisticated techniques that often yield ambiguous results. In this work we show that fBm series map into a scale-free visibility graph whose degree distribution is a function of H. Concretely, it is shown that the exponent of the power-law degree distribution depends linearly on H. This also applies to fractional Gaussian noises (fGn) and generic f^(-b) noises. Taking advantage of these facts, we propose a new methodology to quantify long-range dependence in these series. Its reliability is confirmed with extensive numerical simulations and analytical developments. Finally, we illustrate this method by quantifying the persistent behavior of human gait dynamics. Comment: 5 pages, submitted for publication
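
    A naive O(n^2) construction of the natural visibility graph plus a crude log-log fit of the degree distribution; the input series, the minimum degree for the fit, and the fit method are illustrative simplifications of the procedure summarized above.

        import numpy as np

        def visibility_degrees(y):
            """Node degrees of the natural visibility graph of series y."""
            n, deg = len(y), np.zeros(len(y), dtype=int)
            for a in range(n):
                for b in range(a + 1, n):
                    # a and b see each other iff every point between them
                    # lies strictly below the straight line joining them
                    if all(y[c] < y[b] + (y[a] - y[b]) * (b - c) / (b - a)
                           for c in range(a + 1, b)):
                        deg[a] += 1
                        deg[b] += 1
            return deg

        def degree_exponent(deg, kmin=4):
            """Slope of log P(k) vs log k, a rough power-law exponent."""
            ks = np.arange(kmin, deg.max() + 1)
            pk = np.array([np.mean(deg == k) for k in ks])
            keep = pk > 0
            slope, _ = np.polyfit(np.log(ks[keep]), np.log(pk[keep]), 1)
            return -slope

        # e.g. on white noise; probing the H-dependence needs an fBm generator
        deg = visibility_degrees(np.random.default_rng(5).normal(size=1000))
        print("estimated degree exponent:", round(degree_exponent(deg), 2))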

    Concentrating tripartite quantum information

    We introduce the concentrated information of tripartite quantum states. For three parties Alice, Bob, and Charlie, it is defined as the maximal mutual information achievable between Alice and Charlie via local operations and classical communication performed by Charlie and Bob. We derive upper and lower bounds to the concentrated information, and obtain a closed expression for it on several classes of states, including arbitrary pure tripartite states in the asymptotic setting. We show that distillable entanglement, entanglement of assistance, and quantum discord can all be expressed in terms of the concentrated information, thus revealing its role as a unifying informational primitive. We finally investigate quantum state merging of mixed states with and without additional entanglement. The gap between classical and quantum concentrated information is proven to be an operational figure of merit for mixed-state merging in the absence of additional entanglement. Contrary to pure-state merging, our analysis shows that classical communication in both directions can provide an advantage for the merging of mixed states.
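
    A toy numerical companion (my construction, not from the paper): compute the Alice-Charlie mutual information I(A:C) = S(A) + S(C) - S(AC) for a three-qubit GHZ state, the quantity whose LOCC-assisted maximum defines the concentrated information.

        import numpy as np

        def entropy(rho):
            """von Neumann entropy in bits."""
            w = np.linalg.eigvalsh(rho)
            w = w[w > 1e-12]
            return float(-np.sum(w * np.log2(w)))

        def partial_trace(rho, keep, dims):
            """Reduced state on the subsystems listed in `keep`."""
            n = len(dims)
            rho = rho.reshape(dims + dims)
            for ax in sorted((i for i in range(n) if i not in keep), reverse=True):
                rho = np.trace(rho, axis1=ax, axis2=ax + n)
                n -= 1
            d = int(np.prod([dims[i] for i in keep]))
            return rho.reshape(d, d)

        # |GHZ> = (|000> + |111>)/sqrt(2), subsystems ordered (Alice, Bob, Charlie)
        psi = np.zeros(8)
        psi[0] = psi[7] = 1 / np.sqrt(2)
        rho = np.outer(psi, psi)
        dims = (2, 2, 2)
        S = lambda keep: entropy(partial_trace(rho, keep, dims))
        print("I(A:C) =", S([0]) + S([2]) - S([0, 2]), "bits")

    This prints 1 bit; if Bob measures in the X basis and communicates his outcome, Alice and Charlie are left sharing a Bell pair with I(A:C) = 2 bits, which illustrates the concentration step the abstract formalizes.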

    Homophily and Contagion Are Generically Confounded in Observational Social Network Studies

    We consider processes on social networks that can potentially involve three factors: homophily, or the formation of social ties due to matching individual traits; social contagion, also known as social influence; and the causal effect of an individual's covariates on their behavior or other measurable responses. We show that, generically, all of these are confounded with each other. Distinguishing them from one another requires strong assumptions on the parametrization of the social process or on the adequacy of the covariates used (or both). In particular, we demonstrate with simple examples that asymmetries in regression coefficients cannot identify causal effects, and that very simple models of imitation (a form of social contagion) can produce substantial correlations between an individual's enduring traits and their choices, even when there is no intrinsic affinity between them. We also suggest some possible constructive responses to these results. Comment: 27 pages, 9 figures. V2: Revised in response to referees. V3: Ditto
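
    The imitation point can be seen in a few lines of simulation (my toy version of the argument, not the paper's examples): ties form homophilously on a latent trait, a choice then spreads by pure imitation with no trait-choice affinity, and yet trait and choice come out correlated in the observed data.

        import numpy as np

        rng = np.random.default_rng(3)
        n = 500
        trait = rng.normal(size=n)

        # homophilous ties: tie probability decays with trait distance
        sim = np.exp(-np.abs(trait[:, None] - trait[None, :]))
        adj = rng.random((n, n)) < 0.05 * sim
        adj = np.triu(adj, 1)
        adj = adj | adj.T

        # pure imitation: adoption depends only on neighbors, never on trait
        choice = (rng.random(n) < 0.05).astype(float)   # random early adopters
        for _ in range(50):
            frac = (adj @ choice) / np.maximum(adj.sum(1), 1)
            choice = np.maximum(choice, rng.random(n) < 0.3 * frac)

        print("corr(trait, choice):", round(np.corrcoef(trait, choice)[0, 1], 3))

    A single realization typically shows a sizable sample correlation, since adoption clusters in trait space around wherever the seeds happened to land.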

    On Hilberg's Law and Its Links with Guiraud's Law

    Hilberg (1990) supposed that the finite-order excess entropy of a random human text is proportional to the square root of the text length. Assuming that Hilberg's hypothesis is true, we derive Guiraud's law, which states that the number of word types in a text grows at least proportionally to the square root of the text length. Our derivation is based on a mathematical conjecture in coding theory and on several experiments suggesting that words can be defined approximately as the nonterminals of the shortest context-free grammar for the text. Such an operational definition of words can be applied even to texts deprived of spaces, which do not allow for Mandelbrot's "intermittent silence" explanation of Zipf's and Guiraud's laws. In contrast to Mandelbrot's model, ours assumes some probabilistic long-memory effects in human narration and might be capable of explaining Menzerath's law. Comment: To appear in Journal of Quantitative Linguistics
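
    The empirical side of the Guiraud-type scaling is easy to check on any plain-text corpus. This sketch counts word types over growing prefixes and compares them to the square root of the token count; "corpus.txt" is a hypothetical file, and whitespace tokens stand in for the grammar-based word definition used in the paper.

        import math

        def type_token_curve(tokens, points=20):
            """(n, number of distinct types among the first n tokens) pairs."""
            step = max(1, len(tokens) // points)
            seen, curve = set(), []
            for i, tok in enumerate(tokens, 1):
                seen.add(tok)
                if i % step == 0:
                    curve.append((i, len(seen)))
            return curve

        with open("corpus.txt", encoding="utf-8") as fh:   # hypothetical corpus
            tokens = fh.read().lower().split()

        for n, v in type_token_curve(tokens):
            print(f"n={n:>8}  types={v:>7}  types/sqrt(n)={v / math.sqrt(n):6.2f}")

    Under the Guiraud-type scaling derived above, the last column should stay roughly bounded below rather than shrink toward zero as n grows.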

    Reductions of Hidden Information Sources

    In all but special circumstances, measurements of time-dependent processes reflect internal structures and correlations only indirectly. Building predictive models of such hidden information sources requires discovering, in some way, the internal states and mechanisms. Unfortunately, there are often many possible models that are observationally equivalent. Here we show that the situation is not as arbitrary as one would think. We show that generators of hidden stochastic processes can be reduced to a minimal form and compare this reduced representation to that provided by computational mechanics, the epsilon-machine. On the way to developing deeper, measure-theoretic foundations for the latter, we introduce a new two-step reduction process. The first step (internal-event reduction) produces the smallest observationally equivalent sigma-algebra and the second (internal-state reduction) removes sigma-algebra components that are redundant for optimal prediction. For several classes of stochastic dynamical systems these reductions produce representations that are equivalent to epsilon-machines. Comment: 12 pages, 4 figures; 30 citations; Updates at http://www.santafe.edu/~cm
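
    A toy illustration of the internal-state reduction idea (a crude finite-horizon heuristic, not the paper's measure-theoretic construction): states of a small hidden Markov generator are merged when their predicted symbol distributions over the next few steps coincide.

        import numpy as np

        # A 3-state generator in which states 1 and 2 are redundant:
        # transition[s, s'] is the state-transition matrix, emit[s, o] the
        # probability of emitting symbol o from state s.
        transition = np.array([[0.0, 0.5, 0.5],
                               [1.0, 0.0, 0.0],
                               [1.0, 0.0, 0.0]])
        emit = np.array([[0.9, 0.1],
                         [0.2, 0.8],
                         [0.2, 0.8]])

        def future_signature(s, depth=4):
            """Marginal next-symbol distributions over `depth` future steps."""
            belief = np.zeros(len(transition))
            belief[s] = 1.0
            dists = []
            for _ in range(depth):
                belief = belief @ transition
                dists.append(belief @ emit)
            return np.round(np.concatenate(dists), 6).tobytes()

        classes = {}
        for s in range(len(transition)):
            classes.setdefault(future_signature(s), []).append(s)
        print("merged state classes:", list(classes.values()))

    Matching finite-horizon marginals is only a necessary condition for observational equivalence, which is one reason the paper's reduction works at the level of sigma-algebras instead.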

    Complex temporal patterns in molecular dynamics: a direct measure of the phase-space exploration by the trajectory at macroscopic time scales

    Computer-simulated trajectories of bulk water molecules form complex spatiotemporal structures at the picosecond time scale. This intrinsic complexity, which underlies the formation of molecular structures at longer time scales, has been quantified using a measure of statistical complexity. The method estimates the information contained in the molecular trajectory by detecting and quantifying temporal patterns present in the simulated data (velocity time series). Two types of temporal patterns are found. The first, defined by the short-time correlations corresponding to the velocity autocorrelation decay times (≈0.1 ps), remains asymptotically stable for time intervals longer than several tens of nanoseconds. The second is caused by previously unknown longer-time correlations (found at time scales longer than nanoseconds) leading to a value of statistical complexity that slowly increases with time. A direct measure, based on the notion of statistical complexity, is introduced that describes how the trajectory explores the phase space and that is independent of the particular molecular signal used as the observed time series.
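
    A sketch of a pattern-based complexity estimate in the same spirit, using ordinal patterns on a synthetic correlated signal as a stand-in for the MD velocity series (the paper's actual pattern-detection method and data are more involved).

        import numpy as np
        from collections import Counter

        def ordinal_pattern_entropy(x, order=4):
            """Shannon entropy (bits) of length-`order` ordinal patterns."""
            pats = Counter(tuple(np.argsort(x[i:i + order]))
                           for i in range(len(x) - order))
            total = sum(pats.values())
            p = np.array([c / total for c in pats.values()])
            return float(-np.sum(p * np.log2(p)))

        rng = np.random.default_rng(4)
        # AR(1) surrogate for one velocity component with short-time correlations
        x = np.empty(100_000)
        x[0] = 0.0
        for t in range(1, len(x)):
            x[t] = 0.9 * x[t - 1] + rng.normal()

        # how the temporal-pattern entropy behaves over growing windows
        for n in (5_000, 20_000, 100_000):
            print(n, round(ordinal_pattern_entropy(x[:n]), 3))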